47 research outputs found

    A methodology for efficient code optimizations and memory management

    The key to optimizing software is the correct choice, ordering, and parameterization of optimizing transformations, which has remained an open problem in compilation research for decades for several reasons. First, most of the compilation subproblems and transformations are interdependent, so addressing them separately is not effective. Second, it is very hard to couple the transformation parameters to the processor architecture (e.g., cache size and associativity) and to the algorithm characteristics (e.g., data reuse); compiler designers and researchers therefore either ignore them or account for them only partly. Third, the search space (all combinations of transformation parameters) is very large, so searching it is impractical. In this paper, the above problems are addressed for data-dominant affine loop kernels, delivering significant contributions. A novel methodology is presented that takes as input the underlying architecture details and algorithm characteristics and outputs near-optimum parameters for six code optimizations, targeting L1 accesses, L2 accesses, DDR accesses, execution time, or energy consumption. The proposed methodology has been evaluated on both embedded and general-purpose processors and on six well-known algorithms, achieving high speedups and energy gains over the gcc compiler, hand-written optimized code, and Polly.
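
    As a concrete illustration of how a transformation parameter can be coupled to the cache hierarchy, the sketch below derives a loop-tiling (blocking) factor from the cache size and associativity. The formula, the tile_size helper, and the heuristic of reserving one cache way as slack are illustrative assumptions, not the paper's actual model.

```python
# A minimal sketch, assuming a matrix-multiply-like kernel in which three
# T x T tiles must stay resident in the L1 data cache at once.

def tile_size(cache_bytes: int, associativity: int, elem_bytes: int = 8,
              tiles_resident: int = 3) -> int:
    """Largest square tile T such that `tiles_resident` T x T tiles fit in cache."""
    # Reserve one way of the cache as slack against conflict misses
    # (a common conservative heuristic, assumed here for illustration).
    usable = cache_bytes - cache_bytes // associativity
    t = int((usable / (tiles_resident * elem_bytes)) ** 0.5)
    return max(t, 1)

# Example: a 32 KiB, 8-way L1 data cache and double-precision elements.
T = tile_size(32 * 1024, 8, elem_bytes=8)
print(f"tile size: {T}x{T}")  # the i/j/k loops would then be blocked by T
```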

    Distributed Simulation of High-Level Algebraic Petri Nets

    In the field of Petri nets, simulation is an essential tool to validate and evaluate models. Conventional simulation techniques, designed for sequential computers, are too slow when the system to simulate is large or complex. The aim of this work is to find techniques that accelerate simulations by exploiting the parallelism available in current commercial multicomputers, and to use these techniques to study a class of Petri nets called high-level algebraic nets. These nets exploit the rich theory of algebraic specifications for high-level Petri nets: Petri nets gain a great deal of modelling power by representing dynamically changing items as structured tokens, while algebraic specifications have turned out to be an adequate and flexible instrument for handling structured items. This work focuses on ECATNets (Extended Concurrent Algebraic Term Nets), whose most distinctive feature is a semantics defined in terms of rewriting logic. ECATNets nevertheless have two drawbacks: they hide the aspect of time, and they exploit poorly the parallelism inherent in the models. Three distributed simulation techniques have been considered: asynchronous conservative, asynchronous optimistic, and synchronous. These algorithms have been implemented in a multicomputer environment: a network of workstations. The influence that factors such as the characteristics of the simulated models, the organisation of the simulators, and the characteristics of the target multicomputer have on the performance of the simulations has been measured and characterised. It is concluded that synchronous distributed simulation techniques are not suitable for this kind of model, although they may provide good performance in other environments. Conservative and optimistic distributed simulation techniques perform well, especially when the model to simulate is complex or large, which is precisely the worst case for traditional sequential simulators. Studies previously considered unrealisable due to their exceedingly high computational cost can thus be performed in reasonable time, and the range of applications for multicomputers is broadened beyond numeric computation.
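
    For readers unfamiliar with the first of the three techniques, the sketch below shows the core of an asynchronous conservative simulator in the style of Chandy-Misra-Bryant null messages; the class and method names are illustrative and not taken from the work itself.

```python
# A minimal sketch of conservative distributed simulation: each logical
# process only executes events proven safe by its neighbours' clocks, and
# sends null messages (clock + lookahead) to avoid deadlock.
import heapq

class LogicalProcess:
    def __init__(self, name: str, lookahead: float):
        self.name = name
        self.lookahead = lookahead   # minimum delay before this LP can affect others
        self.clock = 0.0
        self.events = []             # heap of pending (timestamp, payload) events
        self.input_clocks = {}       # last timestamp received on each input channel

    def safe_horizon(self) -> float:
        # No straggler can arrive with a timestamp below every input clock,
        # so events strictly below this horizon are safe to execute.
        return min(self.input_clocks.values(), default=float("inf"))

    def step(self, send) -> None:
        horizon = self.safe_horizon()
        while self.events and self.events[0][0] < horizon:
            timestamp, payload = heapq.heappop(self.events)
            self.clock = timestamp
            # ...fire the corresponding transition, scheduling local or
            # remote events as the model dictates (omitted in this sketch)...
        # Null message: a promise to send nothing earlier than
        # clock + lookahead, which lets neighbours advance their horizons.
        send((self.name, "null", self.clock + self.lookahead))
```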

    Risk-driven proactive fault-tolerant operation of IaaS providers

    In order to improve service execution in Clouds, the management of the Cloud infrastructure has to take measures to adhere to Service Level Agreements and Business Level Objectives, from the application layer down to how services are supported at the lowest hardware levels. In this paper, a risk model methodology and a holistic management approach are developed, specific to the operation of a Cloud Infrastructure Provider, and applied to improve SLA fault tolerance in the Cloud infrastructure. Risk assessments analyse execution-specific data from the Cloud infrastructure and feed a business-driven holistic management component that is part of a Cloud Manager. Initial results show improved eco-efficiency, higher virtual machine availability, and fewer SLA failures across the whole Cloud infrastructure when the combined risk-based fault-tolerance approach is applied.
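
    A minimal sketch of the risk-driven trigger idea follows, assuming the classic risk = likelihood x impact formulation; the threshold, host fields, and migrate hook are hypothetical illustrations, not the paper's model.

```python
# A minimal sketch: assess each physical machine from monitored data and act
# proactively (here, by migrating VMs) before an SLA violation occurs.

def failure_risk(prob_failure: float, sla_penalty: float) -> float:
    """Classic risk formulation: likelihood times impact."""
    return prob_failure * sla_penalty

def proactive_pass(hosts, migrate, risk_threshold: float = 0.5) -> None:
    for host in hosts:
        if failure_risk(host["p_fail"], host["penalty"]) > risk_threshold:
            # Fault-tolerance action: move SLA-bound VMs away in advance.
            for vm in host["vms"]:
                migrate(vm, source=host["id"])

hosts = [{"id": "pm-3", "p_fail": 0.12, "penalty": 10.0, "vms": ["vm-7"]}]
proactive_pass(hosts, migrate=lambda vm, source: print(f"migrating {vm} off {source}"))
```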

    TANGO: Transparent heterogeneous hardware Architecture deployment for eNergy Gain in Operation

    The paper is concerned with how software systems actually use Heterogeneous Parallel Architectures (HPAs), with the goal of optimizing power consumption on these resources. It argues the need for novel methods and tools to support software developers who aim to optimise the power consumed when designing, developing, deploying and running software on HPAs, while maintaining other quality aspects of the software at adequate, agreed levels. To this end, a reference architecture supporting energy efficiency at application construction, deployment, and operation is discussed, together with its implementation and evaluation plans.

    Resource failures risk assessment modelling in distributed environments

    Service providers offer access to resources and services in distributed environments such as Grids and Clouds through formal Service Level Agreements (SLAs), and need well-balanced infrastructures so that they can maximise the Quality of Service (QoS) they offer and minimise the number of SLA violations. We propose a mathematical model to predict the risk of failure of resources in such environments, using a discrete-time analytical model driven by reliability functions fitted to observed data. The model relies on a resource's historical data to predict its risk of failure over a given time interval. The model is evaluated by comparing the predicted risk of failure with the observed risk of failure, and is shown to predict resources' risk of failure accurately, allowing a service provider to choose selectively which SLA requests to accept.
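
    The discrete-time idea can be made concrete with a small sketch. The paper fits reliability functions to observed data; the Weibull form below is an assumption chosen for illustration. Given survival to age t, the risk of failure within the next interval dt is 1 - R(t + dt) / R(t).

```python
# A minimal sketch, assuming a Weibull reliability function R(t) fitted to a
# resource's failure history (the specific form is illustrative).
import math

def weibull_reliability(t: float, shape: float, scale: float) -> float:
    """R(t) = exp(-(t / scale) ** shape), the probability of surviving to age t."""
    return math.exp(-((t / scale) ** shape))

def risk_of_failure(age: float, interval: float, shape: float, scale: float) -> float:
    """Probability of failing within `interval`, given survival to `age`."""
    r_now = weibull_reliability(age, shape, scale)
    r_later = weibull_reliability(age + interval, shape, scale)
    return 1.0 - r_later / r_now

# A provider could decline an SLA request whose duration carries too much risk.
risk = risk_of_failure(age=200.0, interval=24.0, shape=1.5, scale=1000.0)
print(f"risk of failure in the next 24h: {risk:.3f}")
```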

    To Trust or Not to Trust? Developing Trusted Digital Spaces through Timely Reliable and Personalized Provenance

    Organizations are increasingly dependent on data stored and processed by distributed, heterogeneous services to make critical, high-value decisions. However, these service-oriented computing environments are dynamic in nature and are becoming ever more complex systems of systems. In such evolving and dynamic ecosystem infrastructures, knowing how data was derived is of significant importance in determining its validity and reliability. To address this, a number of advocates and theorists argue that provenance is critical to building trust in data and the services that generated it, as it provides evidence with which data consumers can judge the integrity of the results. This paper presents a summary of the STRAPP (trusted digital Spaces through Timely Reliable And Personalised Provenance) project, which is designing and engineering mechanisms towards a holistic solution for a number of real-world service-based decision-support systems.

    Energy-aware cost prediction and pricing of virtual machines in cloud computing environments

    With the increasing cost of electricity, Cloud providers consider energy consumption one of the major cost factors to be controlled within their infrastructure. Consequently, various proactive and reactive management mechanisms are used to manage the Cloud resources efficiently and to reduce energy consumption and cost. These mechanisms support energy-awareness at the level of both Physical Machines (PMs) and Virtual Machines (VMs) in order to make corrective decisions. This paper introduces a novel Cloud system architecture that facilitates an energy-aware and efficient Cloud operation methodology, and presents a cost prediction framework to estimate the total cost of VMs based on their resource usage and power consumption. The evaluation on a Cloud testbed shows that the proposed energy-aware cost prediction framework can predict the workload and power consumption, and estimate the total cost of the VMs, with good accuracy across various Cloud application workload patterns. Furthermore, a set of energy-based pricing schemes is defined, intended to provide the incentives needed to create an energy-efficient and economically sustainable ecosystem. Further evaluation results show that the adoption of energy-based pricing by Cloud and application providers creates additional economic value for both under different market conditions.
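
    A minimal sketch of the cost-composition idea follows: a VM's total cost combines a resource-usage charge with an energy charge derived from its power draw. The linear power model and all rates are illustrative assumptions, not the paper's framework.

```python
# A minimal sketch, assuming a linear utilisation-to-power model and flat
# per-hour and per-kWh rates (all values hypothetical).

def vm_power_watts(cpu_util: float, p_idle: float = 20.0, p_max: float = 95.0) -> float:
    """Linear power model: idle share plus a utilisation-proportional share."""
    return p_idle + (p_max - p_idle) * cpu_util

def vm_total_cost(cpu_util: float, hours: float,
                  resource_rate: float = 0.05,          # currency per VM-hour
                  energy_rate: float = 0.15) -> float:  # currency per kWh
    energy_kwh = vm_power_watts(cpu_util) * hours / 1000.0
    return resource_rate * hours + energy_rate * energy_kwh

# Predicted total cost of a VM at 60% average utilisation over one day:
print(f"{vm_total_cost(cpu_util=0.6, hours=24):.2f}")
```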

    Risk driven Smart Home resource management using cloud services

    In order to fully exploit the concept of the Smart Home, challenges associated with managing multiple devices in consumer-facing applications have to be addressed. Chief among these is the management of resource usage in the home through improved utilization of devices, achieved by integrating them with the wider environment in which they operate. The traditional model of the isolated device no longer applies: the future home will be connected with services provided by third parties ranging from supermarkets to domestic appliance manufacturers. To achieve this, the paper explores risk-based integrated device management and contextualization based on the cloud computing model. We present an architecture and evaluate risk models to assist in managing devices from a security, privacy, and resource-management perspective. We then propose an expansion of the risk-based approach to wider data sharing between the home and external services using the key indicators of TREC (Trust, Risk, Eco-efficiency and Cost). The paper contributes to Smart Home research by showing how the Cloud service-management principles of risk and contextualization for virtual machines can produce solutions to the emerging challenges facing a new generation of Smart Home devices.
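
    One plausible reading of the TREC indicators is a weighted score gating each data-sharing decision; the weights, signs, and threshold in the sketch below are purely illustrative assumptions, not the paper's formulation.

```python
# A minimal sketch: combine the four TREC indicators, each normalised to
# [0, 1], into a single score; risk and cost count against sharing.

def trec_score(trust: float, risk: float, eco: float, cost: float,
               weights=(0.4, 0.3, 0.15, 0.15)) -> float:
    w_trust, w_risk, w_eco, w_cost = weights
    return w_trust * trust - w_risk * risk + w_eco * eco - w_cost * cost

def share_with_service(indicators: dict, threshold: float = 0.2) -> bool:
    """Share data with an external service only if the TREC score clears the bar."""
    return trec_score(**indicators) >= threshold

print(share_with_service({"trust": 0.8, "risk": 0.3, "eco": 0.6, "cost": 0.4}))  # True
```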

    Energy-Aware Profiling for Cloud Computing Environments

    Cloud Computing has changed the way people use IT resources today. Instead of buying their own IT resources, they can use the services offered by Cloud Computing at a reasonable cost under a "pay-per-use" model. However, with the wide adoption of Cloud Computing, the cost of maintaining the Cloud infrastructure has become a vital issue for providers, especially since energy costs make up a large share of the cost of underpinning these resources. This paper therefore proposes a system architecture that can be used to profile resource usage in terms of energy consumption. From the profiled data, application developers can make better energy-aware decisions when creating or optimising applications to be more energy efficient. The paper also presents an adaptation of an existing Cloud architecture that enables energy-aware profiling based on the proposed system. The results of the conducted experiments show energy-awareness at both the physical host and virtual machine levels.
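
    A minimal sketch of the kind of VM-level attribution such profiling enables follows: measured host power is apportioned to VMs in proportion to their CPU shares, with the static (idle) power split evenly. The apportionment rule is an illustrative assumption, not the paper's method.

```python
# A minimal sketch: split a host's measured power draw across its VMs.
# Idle power is divided evenly; dynamic power follows CPU share.

def attribute_power(host_watts: float, idle_watts: float,
                    vm_cpu_shares: dict) -> dict:
    dynamic = max(host_watts - idle_watts, 0.0)
    total_share = sum(vm_cpu_shares.values()) or 1.0
    static_each = idle_watts / len(vm_cpu_shares)
    return {vm: static_each + dynamic * share / total_share
            for vm, share in vm_cpu_shares.items()}

# A host drawing 180 W (70 W idle) running two VMs at 60% and 20% CPU:
profile = attribute_power(180.0, 70.0, {"vm-a": 0.6, "vm-b": 0.2})
print(profile)  # {'vm-a': 117.5, 'vm-b': 62.5}
```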